32 research outputs found

    Unsupervised feature-learning for galaxy SEDs with denoising autoencoders

    Full text link
    With the increasing number of deep multi-wavelength galaxy surveys, the spectral energy distribution (SED) of galaxies has become an invaluable tool for studying the formation of their structures and their evolution. In this context, standard analysis relies on simple spectro-photometric selection criteria based on a few SED colors. Although this fully supervised classification has already yielded clear achievements, it is not optimal for extracting the relevant information from the data. In this article, we propose to employ recent advances in machine learning, and more precisely in feature learning, to derive a data-driven diagram. We show that the proposed approach based on denoising autoencoders recovers the bimodality in the galaxy population in an unsupervised manner, without using any prior knowledge of galaxy SED classification. This technique is compared to principal component analysis (PCA) and to standard color/color representations. In addition, preliminary results illustrate that it captures extra physically meaningful information, such as redshift dependence, galaxy mass evolution, and variation in the specific star formation rate (sSFR). PCA also yields an unsupervised representation with physical properties, such as mass and sSFR, although this representation separates out fewer other characteristics (bimodality, redshift evolution) than denoising autoencoders.
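
    As a rough illustration of the technique described above, the sketch below trains a small denoising autoencoder on SEDs represented as fixed-length vectors of broadband fluxes. The layer sizes, noise level, function names, and the seds array are illustrative assumptions, not the configuration used in the article.

        # Minimal denoising-autoencoder sketch (PyTorch); sizes and noise level are assumptions.
        import torch
        import torch.nn as nn

        class DenoisingAE(nn.Module):
            def __init__(self, n_bands=10, n_hidden=32, n_code=2):
                super().__init__()
                # A 2-D code gives the kind of data-driven diagram discussed above.
                self.encoder = nn.Sequential(
                    nn.Linear(n_bands, n_hidden), nn.ReLU(),
                    nn.Linear(n_hidden, n_code))
                self.decoder = nn.Sequential(
                    nn.Linear(n_code, n_hidden), nn.ReLU(),
                    nn.Linear(n_hidden, n_bands))

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def train_dae(model, seds, sigma=0.05, epochs=200, lr=1e-3):
            # seds: (n_galaxies, n_bands) tensor of normalised photometry.
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(epochs):
                noisy = seds + sigma * torch.randn_like(seds)       # corrupt the input
                loss = nn.functional.mse_loss(model(noisy), seds)   # reconstruct the clean SED
                opt.zero_grad(); loss.backward(); opt.step()
            return model

    After training, model.encoder(seds) yields two unsupervised features per galaxy, i.e. the kind of data-driven diagram in which the bimodality discussed above can be examined.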

    Convergent ADMM Plug and Play PET Image Reconstruction

    Full text link
    In this work, we investigate hybrid PET reconstruction algorithms based on coupling a model-based variational reconstruction with the application of a separately learnt deep neural network (DNN) operator in an ADMM Plug and Play framework. Following recent results in optimization, fixed-point convergence of the scheme can be achieved by enforcing an additional constraint on the network parameters during learning. We propose such an ADMM algorithm and show on a realistic [18F]-FDG synthetic brain exam that the proposed scheme indeed converges experimentally to a meaningful fixed point. When the proposed constraint is not enforced during learning of the DNN, the ADMM algorithm was observed experimentally not to converge.
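
    For orientation, here is a schematic Plug-and-Play ADMM loop of the kind investigated in this work; prox_data and denoiser are placeholders for the PET data-fidelity proximal step and the separately learnt DNN, and the penalty parameter and iteration count are arbitrary choices, not the paper's.

        import numpy as np

        def pnp_admm(x0, prox_data, denoiser, rho=1.0, n_iter=100):
            # Plug-and-Play ADMM, scaled form.
            # prox_data(v, rho): proximal operator of the PET data-fidelity term (placeholder).
            # denoiser(v): learnt DNN playing the role of the prior's proximal operator;
            #              the fixed-point convergence discussed above relies on a constraint
            #              enforced on the network during its training.
            x = x0.copy()
            z = x0.copy()
            u = np.zeros_like(x0)            # scaled dual variable
            for _ in range(n_iter):
                x = prox_data(z - u, rho)    # data-fidelity update
                z = denoiser(x + u)          # prior update via the learnt denoiser
                u = u + x - z                # dual update
            return x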

    GREAT3 results I: systematic errors in shear estimation and the impact of real galaxy morphology

    Get PDF
    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically-varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially-varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ∼1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
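
    The calibration biases quantified above are conventionally defined through the linear model g_obs = (1 + m) g_true + c for each shear component, with m the multiplicative and c the additive bias. The snippet below is a minimal least-squares fit of that model, not the challenge's own estimator.

        import numpy as np

        def shear_bias(g_true, g_meas):
            # Fit g_meas = (1 + m) * g_true + c for one shear component.
            # g_true, g_meas: 1-D arrays of input and recovered shears (one value per field).
            slope, intercept = np.polyfit(g_true, g_meas, 1)
            return slope - 1.0, intercept    # (multiplicative bias m, additive bias c)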

    Exploitation de corrélations spatiales et temporelles en tomographie par émission de positrons

    Get PDF
    In this thesis, we propose, implement, and evaluate algorithms that improve the spatial resolution of reconstructed images and reduce data noise in positron emission tomography (PET) imaging. These algorithms were developed for a high-resolution tomograph (HRRT) and applied to brain imaging, but they can be used for other tomographs and studies. We first developed an iterative reconstruction algorithm including a stationary, isotropic model of the resolution in image space, measured experimentally. We evaluated the impact of this resolution model on Monte Carlo simulations, physical phantom experiments, and two clinical studies by comparing our algorithm with a reference reconstruction algorithm. This study suggests that biases due to partial volume effects are reduced, in particular in the clinical studies, and that better spatial and temporal correlations are obtained at the voxel level. However, other methods are needed to further reduce data noise. We then proposed a maximum a posteriori denoising algorithm, suited to dynamic data, that temporally denoises either the raw data (sinograms) or the reconstructed images. The prior models the wavelet coefficients of the underlying noise-free signals (images or sinograms). We compared this technique with a reference denoising method on replicated simulations, which illustrates the potential benefit of denoising the sinograms.
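
    The thesis places a maximum a posteriori prior on the wavelet coefficients of the noise-free signals; the sketch below only shows the wavelet-domain mechanics for one voxel or sinogram bin followed over time, substituting a simple soft-thresholding rule (with an assumed wavelet and threshold) for that prior.

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", k=3.0):
            # signal: 1-D time-activity curve (one voxel or sinogram bin across frames).
            coeffs = pywt.wavedec(signal, wavelet)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
            shrunk = [coeffs[0]] + [pywt.threshold(c, k * sigma, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(shrunk, wavelet)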

    Deep learning for a space-variant deconvolution in galaxy surveys

    No full text
    The deconvolution of large survey images with millions of galaxies requires developing a new generation of methods that can take a space-variant point spread function into account. These methods also have to be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-net deep neural network architecture to learn parameters adapted to galaxy image processing in a supervised setting and studied two deconvolution strategies. The first approach is a post-processing of a mere Tikhonov deconvolution with a closed-form solution, and the second is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results, based on GREAT3 simulations with realistic galaxy images and point spread functions, show that the two approaches outperform standard techniques based on convex optimization, whether assessed on galaxy image reconstruction or on shape recovery. The approach based on the Tikhonov deconvolution leads to the most accurate results, except for the ellipticity errors at high signal-to-noise ratio, where the ADMM approach performs slightly better. Considering that the Tikhonov approach is also more computationally efficient for processing a large number of galaxies, we recommend it in this scenario.
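
    As a reference point for the first strategy, the snippet below is the closed-form Tikhonov step alone, assuming an identity regularisation operator and a centred PSF sampled on the same grid as the image; the learnt U-net that post-processes its output, and the space-variant handling, are not reproduced here.

        import numpy as np

        def tikhonov_deconvolve(image, psf, lam=1e-2):
            # Closed-form Tikhonov filter in Fourier space: X = conj(H) Y / (|H|^2 + lam).
            H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)   # PSF assumed centred
            Y = np.fft.fft2(image)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
            return np.real(np.fft.ifft2(X))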

    A highly precise shear bias estimator independent of the measured shape noise

    No full text
    We present a new method to estimate shear measurement bias in image simulations that significantly improves the precision with respect to current techniques. Our method is based on measuring the shear response for individual images. We generated sheared versions of the same image to measure how the galaxy shape changes with the small applied shear. This shear response is the multiplicative shear bias for each image. In addition, we also measured the individual additive bias. Using the same noise realizations for each sheared version allows us to compute the shear response at very high precision. The estimated shear bias of a sample of galaxies is then the average of the individual measurements. This leads to a large improvement over previous methods in the precision of multiplicative-bias estimates, since our method is not affected by noise from shape measurements, which until now has been the dominant source of uncertainty. As a consequence, the method does not require shape-noise suppression for a precise estimation of the multiplicative shear bias. Our method can be readily used for numerous applications, such as shear measurement validation and calibration, reducing the number of necessary simulated images by a few orders of magnitude to achieve the same precision. Key words: gravitational lensing: weak / methods: data analysis / methods: observational / methods: statistical / cosmology: observations / dark matter
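
    A minimal sketch of the per-image response measurement described above; render, measure_shape, and noise are placeholders for an image simulator, a shape estimator, and one fixed noise realisation, and are not part of the paper.

        def shear_response_bias(render, measure_shape, noise, dg=0.01):
            # render(g): noiseless galaxy image sheared by g (one shear component).
            # measure_shape(img): estimated ellipticity of an image.
            # noise: a single fixed noise map, added to BOTH sheared versions so that
            #        the noise largely cancels in the difference.
            e_plus = measure_shape(render(+dg) + noise)
            e_minus = measure_shape(render(-dg) + noise)
            R = (e_plus - e_minus) / (2.0 * dg)   # shear response de/dg
            return R - 1.0                        # per-image multiplicative bias estimate

    Averaging these per-image values over the sample then gives the multiplicative bias of the shape estimator, at a precision limited by the residual (largely cancelled) noise rather than by shape noise.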